Robot insecurity is on stage. There is emerging concern about major robot vulnerabilities and their adverse consequences. However, a considerable gap remains between the robotics and cybersecurity domains. To fill this gap, the present technical report presents the Robotics CTF (RCTF), an online playground for challenging robot security from any browser. We describe the architecture of the RCTF and provide 9 scenarios in which hackers can challenge the security of different robotic setups. Our work enables security researchers to a) reproduce virtual robot scenarios locally and b) change the networking setup to mimic real robot targets. We advocate for hacker-powered security in robotics and contribute by open sourcing our scenarios.
Robots are typically not created with security as a main concern. In contrast to typical IT systems, cyber-physical systems rely on security to handle safety aspects. In light of this, classical scoring methods such as the Common Vulnerability Scoring System (CVSS) cannot accurately capture the severity of robot vulnerabilities. The present research work focuses on creating an open, freely accessible Robot Vulnerability Scoring System (RVSS) that considers the major relevant issues in robotics, including a) robot safety aspects, b) assessment of the downstream implications of a given vulnerability, c) library and third-party scoring assessments, and d) environmental variables, such as the time since vulnerability disclosure or exposure on the network. Finally, an experimental evaluation of RVSS in contrast with CVSS is provided, with a focus on the robot security landscape.
The robotics landscape is undergoing big changes. Robots are spreading and will soon be everywhere. Systems traditionally used in industry are being replaced by collaborative robots, while more and more professional and consumer robots are being introduced into people's daily activities. Robots are increasingly intertwined with other facets of IT and are envisioned to gain far more autonomy, physically interacting with humans. We claim that, following the personal computer (PC) and the smartphone, robots are the next technological revolution, and yet manufacturers are disregarding robot security. This article aims to raise the alarm about the need to deal not only with safety but also with robot security from the very beginning of the forthcoming technological era. We provide herein a document that reviews robot hazards and analyzes the consequences of not facing these issues. We strongly advocate for a security-first approach that must be implemented now.
Robots have gained relevance in society and are increasingly involved in mission-critical tasks. Nonetheless, robot security is being underestimated. Robot security is a complex landscape that often requires a cross-disciplinary perspective which classical approaches lag behind. To address this issue, we present the Robot Security Framework (RSF), a methodology to perform systematic security assessments in robots. We propose, adapt, and develop specific terminology and provide guidelines to enable a holistic security assessment following four main layers (physical, network, firmware, and application). We argue that modern robots should regard internal and external communication security as equally relevant. Finally, we advocate against "security by obscurity". We conclude that the field of security in robotics deserves further research efforts.
Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas. In deep learning, a known solution is error backpropagation, which however requires biologically implausible weight transport from feed-forward to feedback paths. We introduce Phaseless Alignment Learning (PAL), a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies. This is achieved by exploiting the noise naturally found in biophysical systems as an additional carrier of information. In our dynamical system, all weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses. Our method is completely phase-free (no forward and backward passes or phased learning) and allows for efficient error propagation across multi-layer cortical hierarchies, while maintaining biologically plausible signal transport and learning. Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment: compared to random synaptic feedback, it can solve complex tasks with fewer neurons and learn more useful latent representations. We demonstrate this on various classification tasks using a cortical microcircuit model with prospective coding.
Language models (LMs) have demonstrated remarkable performance on downstream tasks, using in-context exemplars or human instructions. Recent works have shown that chain-of-thought (CoT) prompting can elicit models to solve complex reasoning tasks, step-by-step. However, the efficacy of prompt-based CoT methods is restricted to very large LMs such as GPT-3 (175B), thus limiting deployability. In this paper, we revisit the fine-tuning approach to enable complex reasoning in smaller LMs, optimized to efficiently perform a specific task. We propose Fine-tune-CoT, a method that leverages the capabilities of very large LMs to generate reasoning samples and teach smaller models via fine-tuning. We evaluate our method on publicly available LMs across a wide range of complex tasks and model sizes. We find that Fine-tune-CoT enables substantial reasoning capability in small models, whereas previous prompt-based baselines exhibit near-random performance. Student models can even outperform the teacher in some tasks while reducing model size requirements by several orders of magnitude. We conduct extensive ablations and sample studies to understand the reasoning capabilities of student models. We also identify several important nuances that have been overlooked in concurrent fine-tuning works on CoT and address them in our analysis.
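The data-generation step of Fine-tune-CoT can be sketched as follows. The function name, delimiter strings, and rationale format here are illustrative placeholders, not the paper's actual prompt format: the idea is only that a teacher-generated rationale plus answer becomes the completion a student LM is fine-tuned on.

```python
def make_finetune_sample(question: str, rationale: str, answer: str):
    """Format one teacher-generated reasoning sample into a
    (prompt, completion) pair for fine-tuning a student LM.
    The delimiters below are illustrative, not the paper's exact format."""
    prompt = f"{question} ###"
    # The completion contains the teacher's step-by-step rationale
    # followed by the final answer, so the student learns to reason.
    completion = f" {rationale} --> {answer} END"
    return prompt, completion

# Hypothetical teacher output for one training question.
p, c = make_finetune_sample(
    "If there are 3 cars and each car has 4 wheels, how many wheels are there?",
    "Each of the 3 cars has 4 wheels, so 3 * 4 = 12.",
    "12",
)
```

In practice the teacher's rationales would be sampled from a very large LM and filtered for answer correctness before fine-tuning the student on the resulting pairs.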
After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, which inherently makes it difficult to estimate the right probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus. The use of such a crude heuristic raises the question: Rather than wasting precious compute resources and model capacity for learning this strategy at early training stages, can we initialise our models with this behaviour? Here, we show that we can effectively endow our model with a separate module that reflects unigram frequency statistics as prior knowledge. Standard neural language generation architectures offer a natural opportunity for implementing this idea: by initialising the bias term in a model's final linear layer with the log-unigram distribution. Experiments in neural machine translation demonstrate that this simple technique: (i) improves learning efficiency; (ii) achieves better overall performance; and (iii) appears to disentangle strong frequency effects, encouraging the model to specialise in non-frequency-related aspects of language.
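A minimal sketch of this bias initialisation, assuming a toy integer-token corpus and add-one smoothing (both our assumptions, not the paper's setup): with zero-initialised output weights, the model's initial next-token distribution then equals the smoothed unigram distribution of the corpus.

```python
import numpy as np
from collections import Counter

def log_unigram_bias(token_ids, vocab_size, smoothing=1.0):
    """Return a bias vector b[v] = log p_unigram(v), with add-k smoothing
    so that unseen tokens get a finite log-probability."""
    counts = np.full(vocab_size, smoothing, dtype=np.float64)
    for tok, c in Counter(token_ids).items():
        counts[tok] += c
    probs = counts / counts.sum()
    return np.log(probs)

# Toy corpus over a vocabulary of 4 token ids.
corpus = [0, 0, 0, 1, 1, 2]
bias = log_unigram_bias(corpus, vocab_size=4)

# With zero-initialised final-layer weights, logits = W @ h + bias = bias,
# so the initial softmax reproduces the (smoothed) unigram distribution.
logits = np.zeros(4) + bias
init_dist = np.exp(logits) / np.exp(logits).sum()
```

Smoothed counts are [4, 3, 2, 1], so the model starts out predicting tokens with probabilities [0.4, 0.3, 0.2, 0.1] before any training.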
Heteroscedastic regression models a Gaussian variable's mean and variance as a function of covariates. Parametric methods that employ neural networks for these parameter maps can capture complex relationships in the data. Yet, optimizing network parameters via log likelihood gradients can yield suboptimal mean and uncalibrated variance estimates. Current solutions side-step this optimization problem with surrogate objectives or Bayesian treatments. Instead, we make two simple modifications to optimization. Notably, their combination produces a heteroscedastic model with mean estimates that are provably as accurate as those from its homoscedastic counterpart (i.e.~fitting the mean under squared error loss). For a wide variety of network and task complexities, we find that mean estimates from existing heteroscedastic solutions can be significantly less accurate than those from an equivalently expressive mean-only model. Our approach provably retains the accuracy of an equally flexible mean-only model while also offering best-in-class variance calibration. Lastly, we show how to leverage our method to recover the underlying heteroscedastic noise variance.
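The baseline objective whose pathology is described above is the per-sample Gaussian negative log-likelihood. A minimal numpy sketch (the paper's two modifications are not reproduced here) shows why the mean can end up under-trained: the mean's gradient is inversely scaled by the predicted variance.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)),
    dropping the constant 0.5*log(2*pi) term."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

def dnll_dmu(y, mu, log_var):
    """Gradient of the NLL w.r.t. the mean: (mu - y) / sigma^2.
    A high predicted variance shrinks the mean's gradient, so samples
    the model deems noisy contribute little to fitting the mean."""
    return (mu - y) / np.exp(log_var)
```

For the same residual, raising `log_var` from 0 to 2 divides the mean gradient by e^2, which illustrates how log-likelihood training can trade mean accuracy against inflated variance estimates.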
Active target sensing is the task of discovering and classifying an unknown number of targets in an environment and is critical in search-and-rescue missions. This paper develops a deep reinforcement learning approach to plan informative trajectories that increase the likelihood for an uncrewed aerial vehicle (UAV) to discover missing targets. Our approach efficiently (1) explores the environment to discover new targets, (2) exploits its current belief of the target states and incorporates inaccurate sensor models for high-fidelity classification, and (3) generates dynamically feasible trajectories for an agile UAV by employing a motion primitive library. Extensive simulations on randomly generated environments show that our approach is more efficient in discovering and classifying targets than several other baselines. A unique characteristic of our approach, in contrast to heuristic informative path planning approaches, is that it is robust to varying amounts of deviations of the prior belief from the true target distribution, thereby alleviating the challenge of designing heuristics specific to the application conditions.
Nucleolar organizer regions (NORs) are parts of the DNA that are involved in RNA transcription. Due to the silver affinity of associated proteins, argyrophilic NORs (AgNORs) can be visualized using silver-based staining. The average number of AgNORs per nucleus has been shown to be a prognostic factor for predicting the outcome of many tumors. Since manual detection of AgNORs is laborious, automation is of high interest. We present a deep learning-based pipeline for automatically determining the AgNOR-score from histopathological sections. An additional annotation experiment was conducted with six pathologists to provide an independent performance evaluation of our approach. Across all raters and images, we found a mean squared error of 0.054 between the AgNOR-scores of the experts and those of the model, indicating that our approach offers performance comparable to humans.
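The agreement metric reported above is a plain mean squared error between per-image expert and model scores. A minimal sketch with made-up numbers (the 0.054 value comes from the paper's data, not from this example):

```python
import numpy as np

def mse(expert_scores, model_scores):
    """Mean squared error between expert AgNOR-scores and model scores,
    averaged over images (and raters, if flattened into one array)."""
    e = np.asarray(expert_scores, dtype=float)
    m = np.asarray(model_scores, dtype=float)
    return float(np.mean((e - m) ** 2))

# Hypothetical per-image AgNOR-scores for illustration only.
example = mse([1.9, 2.4, 3.1], [2.0, 2.2, 3.0])
```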